Results 1 - 20 of 4,897
1.
Nat Commun ; 15(1): 3116, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600132

ABSTRACT

Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.


Subjects
Auditory Cortex , Sound Localization , Visual Cortex , Visual Perception/physiology , Auditory Cortex/physiology , Neurons/physiology , Visual Cortex/physiology , Photic Stimulation/methods , Acoustic Stimulation/methods
2.
Article in Chinese | MEDLINE | ID: mdl-38561257

ABSTRACT

Objective: This study investigated the effects of signal-to-noise ratio (SNR), frequency, and bandwidth on horizontal sound localization accuracy in normal-hearing young adults. Methods: From August 2022 to December 2022, 20 normal-hearing young adults (7 males and 13 females, aged 20 to 35 years, mean age 25.4 years) participated in horizontal azimuth recognition tests under quiet and noisy conditions. Six narrowband filtered noise stimuli were used, with center frequencies (CF) of 250, 2 000, and 4 000 Hz and bandwidths of 1/6 and 1 octave. Continuous broadband white noise served as the background masker at SNRs of 0, -3, and -12 dB. The root-mean-square error (RMS error) was used to quantify sound localization accuracy, with smaller values indicating higher accuracy. The Friedman test was used to compare the effects of SNR and CF on localization accuracy, and the Wilcoxon signed-rank test was used to compare the impact of the two bandwidths on localization accuracy in noise. Results: In a quiet environment, the RMS error in horizontal azimuth in normal-hearing young adults ranged from 4.3 to 8.1 degrees. Localization accuracy decreased with decreasing SNR: at 0 dB SNR (range: 5.3-12.9 degrees), the difference from the quiet condition was not significant (P>0.05); at -3 dB (range: 7.3-16.8 degrees) and -12 dB SNR (range: 9.4-41.2 degrees), localization accuracy decreased significantly compared with the quiet condition (all P<0.01). Under noisy conditions, localization accuracy differed among stimuli of different frequencies and bandwidths: higher frequencies performed worst, followed by middle frequencies, with lower frequencies performing best (all P<0.01). Localization of 1/6 octave stimuli was more susceptible to noise interference than that of 1 octave stimuli (all P<0.01). Conclusions: The ability of normal-hearing young adults to localize sound in the horizontal plane in noise is influenced by SNR, CF, and bandwidth. Noise at SNRs of -3 dB or lower can degrade narrowband sound localization accuracy. Signals with higher CFs and narrower bandwidths are more susceptible to noise interference.
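The RMS error metric described in the abstract above can be illustrated with a short sketch; the function name and trial data are hypothetical, assuming targets and responses are azimuths in degrees:

```python
import math

def rms_error(targets_deg, responses_deg):
    """Root-mean-square localization error in degrees.

    Smaller values indicate more accurate sound localization.
    """
    if len(targets_deg) != len(responses_deg):
        raise ValueError("need one response per target")
    sq = [(r - t) ** 2 for t, r in zip(targets_deg, responses_deg)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical trials: target azimuths and a listener's responses (degrees).
targets = [-60, -30, 0, 30, 60]
responses = [-52, -33, 4, 27, 70]
print(round(rms_error(targets, responses), 1))  # prints 6.3
```

The quiet-condition values reported above (4.3-8.1 degrees) are RMS errors of exactly this kind, pooled over azimuths.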


Subjects
Sound Localization , Speech Perception , Male , Female , Humans , Young Adult , Adult , Noise , Signal-To-Noise Ratio , Hearing
3.
J Acoust Soc Am ; 155(4): 2460-2469, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38578178

ABSTRACT

Head-worn devices (HWDs) interfere with the natural transmission of sound from the source to the ears of the listener, worsening their localization abilities. The localization errors introduced by HWDs have been mostly studied in static scenarios, but these errors are reduced if head movements are allowed. We studied the effect of 12 HWDs on an auditory-cued visual search task, where head movements were not restricted. In this task, a visual target had to be identified in a three-dimensional space with the help of an acoustic stimulus emitted from the same location as the visual target. The results showed an increase in the search time caused by the HWDs. Acoustic measurements of a dummy head wearing the studied HWDs showed evidence of impaired localization cues, which were used to estimate the perceived localization errors using computational auditory models of static localization. These models were able to explain the search-time differences in the perceptual task, showing the influence of quadrant errors in the auditory-aided visual search task. These results indicate that HWDs have an impact on sound-source localization even when head movements are possible, which may compromise the safety and the quality of experience of the wearer.
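The quadrant errors mentioned above (front-back confusions, where a source is localized to the mirrored hemifield) can be sketched with a minimal classifier; this is an illustrative definition, not the paper's auditory model — the function name and the 0° = front azimuth convention are assumptions:

```python
def is_quadrant_error(target_az, response_az):
    """Flag a front-back confusion: target and response fall in different
    front/rear hemifields (azimuth in degrees, 0 = front, positive
    clockwise, range -180..180; |az| < 90 counts as frontal)."""
    front = lambda az: abs(az) < 90
    return front(target_az) != front(response_az)

# A response near the front-back mirror position of the target
# (target 30 deg front, response 150 deg rear) is a quadrant error.
print(is_quadrant_error(30, 150))   # prints True
print(is_quadrant_error(30, 40))    # ordinary angular error: prints False
```

Counting such confusions separately from plain angular error is what lets the static localization models referenced above explain the search-time differences.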


Subjects
Hearing Aids , Sound Localization , Acoustic Stimulation , Head Movements
4.
Otol Neurotol ; 45(4): 392-397, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38478407

ABSTRACT

OBJECTIVE: To assess cochlear implant (CI) sound processor usage over time in children with single-sided deafness (SSD) and identify factors influencing device use. STUDY DESIGN: Retrospective, chart review study. SETTING: Pediatric tertiary referral center. PATIENTS: Children with SSD who received CI between 2014 and 2020. OUTCOME MEASURE: Primary outcome was average daily CI sound processor usage over follow-up. RESULTS: Fifteen children with SSD who underwent CI surgery were categorized based on age of diagnosis and surgery timing. Over an average of 4.3-year follow-up, patients averaged 4.6 hours/day of CI usage. Declining usage trends were noted over time, with the first 2 years postactivation showing higher rates. No significant usage differences emerged based on age, surgery timing, or hearing loss etiology. CONCLUSIONS: Long-term usage decline necessitates further research into barriers and enablers for continued CI use in pediatric SSD cases.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Hearing Loss, Unilateral , Sound Localization , Speech Perception , Humans , Child , Cochlear Implants/adverse effects , Retrospective Studies , Hearing Loss, Unilateral/surgery , Hearing Loss, Unilateral/rehabilitation , Sound Localization/physiology , Deafness/surgery , Deafness/rehabilitation , Speech Perception/physiology , Treatment Outcome
5.
Trends Hear ; 28: 23312165241229880, 2024.
Article in English | MEDLINE | ID: mdl-38545645

ABSTRACT

Bilateral cochlear implants (BiCIs) provide several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across recipients. Here we consider one of the reasons for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research on interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left and right ears, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by a cascading series of cortical circuits. An internal perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.


Subjects
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Humans , Speech Perception/physiology , Hearing , Sound Localization/physiology
6.
Trends Hear ; 28: 23312165241235463, 2024.
Article in English | MEDLINE | ID: mdl-38425297

ABSTRACT

Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.


Subjects
Auditory Perceptual Disorders , Sound Localization , Virtual Reality , Adult , Humans , Hearing Tests
7.
Neuropsychologia ; 196: 108822, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38342179

ABSTRACT

Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the roles played by the type of sound and by spatial (i.e. binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, testing echolocation experts as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks in which participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e. both were monaural sounds), people performed better (i.e. had lower SNRs) when target and masker used different types of sound (e.g. clicks in a noise masker, or noise in a click masker) than when they used the same type of sound (e.g. clicks in a click masker, or noise in a noise masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of the type of emission used. Further, direct comparison between conditions with and without binaural differences revealed binaural release from masking only when clicks were used as emissions and masker, but not otherwise (i.e. when noise was used as masker or emission). This suggests that echolocation with clicks and with noise may differ in sensitivity to binaural cues. We observed the same pattern of results for echolocation experts and for blind and sighted people new to echolocation, suggesting a limited role of long-term experience or blindness. In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted.
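The adaptive staircase procedure mentioned above can be sketched as follows; the abstract does not specify the exact rule, so this assumes a common 2-down/1-up variant with a hypothetical fixed step size (such a rule converges near the 70.7%-correct point of the psychometric function):

```python
def staircase(responses, start_snr=0.0, step=2.0):
    """Track SNR (dB) over trials with a 2-down/1-up rule:
    two consecutive correct responses lower the SNR (harder),
    any incorrect response raises it (easier)."""
    snr = start_snr
    track = [snr]
    correct_run = 0
    for correct in responses:
        if correct:
            correct_run += 1
            if correct_run == 2:
                snr -= step
                correct_run = 0
        else:
            snr += step
            correct_run = 0
        track.append(snr)
    return track

# Hypothetical run: True = correct echo-localization response.
print(staircase([True, True, True, False, True, True]))
# prints [0.0, 0.0, -2.0, -2.0, 0.0, 0.0, -2.0]
```

Averaging the SNR at the last several reversals of such a track gives the threshold SNR compared across masker conditions.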


Subjects
Sound Localization , Animals , Humans , Sound Localization/physiology , Blindness , Noise , Acoustics , Cues , Perceptual Masking , Acoustic Stimulation/methods
8.
Trends Hear ; 28: 23312165231217910, 2024.
Article in English | MEDLINE | ID: mdl-38297817

ABSTRACT

The present study aimed to define the use of head and eye movements during sound localization in children and adults in order to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (mean age 12.9 years) and seventeen adults (mean age 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate the L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements during sound localization in normal binaural listening, and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield, and during sound motion in both hemifields regardless of movement direction. Disrupted binaural cues had greater effects on localization of moving than of stationary sound. Head movements reveal preferential use of the better-hearing ear, and relatively stable eye positions likely reflect normal vestibulo-ocular reflexes.


Subjects
Sound Localization , Adult , Child , Adolescent , Humans , Eye Movements , Hearing , Hearing Tests , Head Movements
9.
Trends Hear ; 28: 23312165241229572, 2024.
Article in English | MEDLINE | ID: mdl-38347733

ABSTRACT

Subjective reports indicate that hearing aids can disrupt sound externalization and/or reduce the perceived distance of sounds. Here we conducted an experiment to explore this phenomenon and to quantify how frequently it occurs for different hearing-aid styles. Of particular interest were the effects of microphone position (behind the ear vs. in the ear) and dome type (closed vs. open). Participants were young adults with normal hearing or with bilateral hearing loss, who were fitted with hearing aids that allowed variations in the microphone position and the dome type. They were seated in a large sound-treated booth and presented with monosyllabic words from loudspeakers at a distance of 1.5 m. Their task was to rate the perceived externalization of each word using a rating scale that ranged from 10 (at the loudspeaker in front) to 0 (in the head) to -10 (behind the listener). On average, compared to unaided listening, hearing aids tended to reduce perceived distance and lead to more in-the-head responses. This was especially true for closed domes in combination with behind-the-ear microphones. The behavioral data along with acoustical recordings made in the ear canals of a manikin suggest that increased low-frequency ear-canal levels (with closed domes) and ambiguous spatial cues (with behind-the-ear microphones) may both contribute to breakdowns of externalization.


Subjects
Hearing Aids , Hearing Loss, Sensorineural , Sound Localization , Speech Perception , Young Adult , Humans , Speech , Hearing Loss, Bilateral , Noise , Speech Perception/physiology
10.
Trends Hear ; 28: 23312165241230947, 2024.
Article in English | MEDLINE | ID: mdl-38361245

ABSTRACT

Sound localization is an important ability in everyday life. This study investigates the influence of vision and presentation mode on auditory spatial bisection performance. Subjects were asked to identify the smaller perceived distance between three consecutive stimuli that were either presented via loudspeakers (free field) or via headphones after convolution with generic head-related impulse responses (binaural reproduction). Thirteen azimuthal sound incidence angles on a circular arc segment of ±24° at a radius of 3 m were included in three regions of space (front, rear, and laterally left). Twenty normally sighted (measured both sighted and blindfolded) and eight blind persons participated. Results showed no significant differences with respect to visual condition, but strong effects of sound direction and presentation mode. Psychometric functions were steepest in frontal space and indicated median spatial bisection thresholds of 11°-14°. Thresholds increased significantly in rear (11°-17°) and laterally left (20°-28°) space in free field. Individual pinna and torso cues, as available only in free field presentation, improved the performance of all participants compared to binaural reproduction. Especially in rear space, auditory spatial bisection thresholds were three to four times higher (i.e., poorer) using binaural reproduction than in free field. The results underline the importance of individual auditory spatial cues for spatial bisection, irrespective of access to vision, which indicates that vision may not be strictly necessary to calibrate allocentric spatial hearing.


Subjects
Sound Localization , Visually Impaired Persons , Humans , Space Perception/physiology , Blindness/diagnosis , Sound Localization/physiology , Acoustics
11.
PLoS One ; 19(2): e0293811, 2024.
Article in English | MEDLINE | ID: mdl-38394286

ABSTRACT

A hearing aid (HA) or a contralateral routing of signal (CROS) device is an option for unilateral cochlear implant listeners with limited hearing in the unimplanted ear; however, it is uncertain which device provides greater benefit beyond unilateral listening alone. Eighteen unilateral cochlear implant listeners participated in this prospective, within-participants, repeated-measures study. Participants were tested in the cochlear implant alone, cochlear implant + hearing aid, and cochlear implant + CROS device configurations, with a one-month take-home period between each in-person visit. Audiograms, speech perception in noise, and lateralization were evaluated. Subjective feedback was obtained via questionnaires. Marked improvements in speech-in-noise performance and non-implanted-ear lateralization accuracy were observed with the addition of a contralateral hearing aid. There were no significant differences in speech recognition between listening configurations. However, the chronic device-use questionnaires and the final device selection showed a clear preference for the hearing aid in the spatial awareness and communication domains. Individuals with limited hearing in the unimplanted ear demonstrate significant improvement with the addition of a contralateral device. Subjective questionnaires somewhat contrast with clinic-based outcome measures, highlighting the delicate decision-making process involved in clinically advising one device over another to maximize communication benefits.


Subjects
Cochlear Implantation , Cochlear Implants , Hearing Aids , Sound Localization , Speech Perception , Humans , Prospective Studies , Hearing
12.
Sci Rep ; 14(1): 2469, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38291126

ABSTRACT

Sound localization is essential to perceive the surrounding world and to interact with objects. This ability can be learned across time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that when training localization skills, reaching to the sound source to determine its position reduced localization errors faster and to a greater extent than just naming the sources' positions, even though in both tasks participants received the same feedback about the correct position of the sound source after wrong responses. However, it remains to be established which features made reaching to sound more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e. mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested in the normal listening condition, and during the second and third in the altered listening condition. We measured their performance, their subjective judgments (e.g. effort), and their head-related behavior (through kinematic tracking). First, performance decreased when participants were exposed to asymmetrical mild-moderate hearing impairment, more specifically on the ipsilateral side and for the pointing group. Second, all groups decreased their localization errors across the altered listening blocks, but the extent of this reduction was greater for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed a greater error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., they increased approaching head movements toward the space of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between the reaching-to-sounds task and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, as compared to pointing and naming, in the learning process. This effect could be related both to the process of implementing goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.


Subjects
Hearing Loss , Sound Localization , Virtual Reality , Humans , Hearing/physiology , Sound Localization/physiology , Hearing Tests
13.
Article in English | MEDLINE | ID: mdl-38227005

ABSTRACT

The Journal of Comparative Physiology lived up to its name in the last 100 years by including more than 1500 different taxa in almost 10,000 publications. Seventeen phyla of the animal kingdom were represented. The honeybee (Apis mellifera) is the taxon with the most publications, followed by the locust (Locusta migratoria), crayfishes (Cambarus spp.), and the fruit fly (Drosophila melanogaster). The representation of species in this journal in the past thus differs markedly from the 13 model systems named by the National Institutes of Health (USA). We mention major accomplishments of research on species with specific adaptations, specialist animals: for example, the quantitative description of the processes underlying the action potential in squid (Loligo forbesii) and the isolation of the first receptor channel from the electric eel (Electrophorus electricus) and electric ray (Torpedo spp.). Future neuroethological work should make recent genetic and technological developments available for specialist animals. There are many research questions left that may be answered with high yield in specialists, and some questions that can only be answered in specialists. Moreover, the adaptations of animals that occupy specific ecological niches often lend themselves to biomimetic applications. We go into some depth in explaining our thoughts on research into motion vision in insects, sound localization in barn owls, and electroreception in weakly electric fish.


Subjects
Electric Fish , Sound Localization , Strigiformes , Animals , Drosophila melanogaster , Sound Localization/physiology , Vision, Ocular , Electrophorus
14.
Otol Neurotol ; 45(3): e228-e233, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38238908

ABSTRACT

BACKGROUND AND OBJECTIVES: The ability to localize sounds is partly recovered in patients using a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral side. Binaural processing seems effective at least to some extent, despite the difference between electric and acoustic stimulation in the two ears. To obtain further insights into the mechanisms of binaural hearing in these listeners, localization of low- and high-frequency sounds was tested. STUDY DESIGN: The study used a within-subject design in which participants localized sound sources in the horizontal plane. The experiment was conducted in an anechoic chamber, where an array of seven loudspeakers was mounted along the azimuthal span from -90° to +90°. Stimuli with different frequency content were applied: broadband noise and high- and low-frequency noise. SUBJECTS: Ten CI recipients participated in the study. All had an asymmetric hearing loss, with a CI in the poorer ear and an HA on the contralateral side. MAIN OUTCOME MEASURES: Accuracy of sound localization in terms of angular error and percentage of correct localization scores. RESULTS: The median angular error was 40° in the bimodal condition for both broadband and high-frequency noise stimuli, increasing to 47° for low-frequency noise stimuli. In the condition aided unilaterally with the HA only, a median angular error of 78° was observed. CONCLUSIONS: Irrespective of the frequency composition of the stimuli, this group of bimodal listeners showed some ability to localize sounds. Angular errors were larger than those reported in the literature for bilateral CI users or single-sided deaf listeners with a CI. In the unilateral listening condition with the HA alone, localization of sounds was not possible for most subjects.


Subjects
Cochlear Implantation , Cochlear Implants , Hearing Aids , Sound Localization , Speech Perception , Humans , Hearing
15.
Hear Res ; 443: 108952, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38242019

ABSTRACT

The barn owl, a nocturnal raptor with remarkably efficient prey-capturing abilities, has been one of the earliest animal models used for research on the brain mechanisms underlying sound localization. Seminal findings made in its specialized sound-localizing auditory system include discoveries of a midbrain map of auditory space, mechanisms of spatial cue detection underlying sound-driven orienting behavior, and circuit-level changes supporting development and experience-dependent plasticity. These findings have explained properties of vital hearing functions and inspired theories in spatial hearing that extend across diverse animal species, thereby cementing the barn owl's legacy as a powerful experimental system for elucidating fundamental brain mechanisms. This concise review provides an overview of the insights through which the barn owl model system has exemplified the strength of investigating diversity and similarity of brain mechanisms across species. First, we discuss some of the key findings in the specialized system of the barn owl that elucidated brain mechanisms for the detection of auditory cues for spatial hearing. Then we examine how the barn owl has validated mathematical computations and theories underlying optimal hearing across species. Lastly, we conclude with how the barn owl has advanced investigations of developmental and experience-dependent plasticity in sound localization, as well as avenues for future research toward bridging commonalities across species. Analogous to the informative power of astrophysics for understanding nature through diverse exploration of planets, stars, and galaxies across the universe, research across many different animal species pursues a broad understanding of natural brain mechanisms and behavior.


Subjects
Sound Localization , Strigiformes , Animals , Auditory Pathways , Acoustic Stimulation , Hearing
16.
Eur J Neurosci ; 59(7): 1770-1788, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38230578

ABSTRACT

Studies on multisensory perception often focus on simplistic conditions in which one single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.


Subjects
Auditory Perception , Sound Localization , Acoustic Stimulation , Photic Stimulation , Visual Perception , Humans
17.
PLoS One ; 19(1): e0296452, 2024.
Article in English | MEDLINE | ID: mdl-38165991

ABSTRACT

To achieve human-like behaviour during speech interactions, a humanoid robot must estimate the location of a human talker. Here, we present a method to optimize the parameters used for direction of arrival (DOA) estimation, while also considering real-time applications for human-robot interaction scenarios. The method is applied to a binaural sound source localization framework on a humanoid robotic head. Real data are collected and annotated for this work. Optimizations are performed via a brute-force method and a Bayesian model-based method; results are validated and discussed, and effects on latency for real-time use are also explored.
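A common baseline for the kind of binaural DOA estimation described above is cross-correlation time-difference-of-arrival (TDOA) followed by a far-field azimuth conversion. The sketch below is illustrative, not the paper's optimized method; the function name, microphone spacing, and signals are assumptions:

```python
import math

def tdoa_azimuth(left, right, fs, mic_distance=0.15, c=343.0):
    """Estimate far-field source azimuth (degrees; 0 = front,
    positive = right) from two microphone channels. The inter-channel
    delay is the cross-correlation lag with maximum value; a positive
    lag means the right channel lags, i.e. the source is on the left."""
    n = len(left)
    max_lag = int(mic_distance / c * fs) + 1   # physically possible lags
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(left[i] * right[i + lag]
                  for i in range(max(0, -lag), min(n, n - lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    sin_theta = best_lag / fs * c / mic_distance
    sin_theta = max(-1.0, min(1.0, sin_theta))  # clamp numerical overshoot
    return -math.degrees(math.asin(sin_theta))

# Hypothetical impulse reaching the left microphone 10 samples
# before the right one (source on the left, so azimuth is negative).
fs = 48000
left = [0.0] * 200
left[50], left[51] = 1.0, 0.5
right = [0.0] * 200
right[60], right[61] = 1.0, 0.5
print(tdoa_azimuth(left, right, fs))
```

Parameters such as the correlation window length and lag search range are exactly the kind of quantities the abstract's brute-force and Bayesian optimizations would tune against latency.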


Subjects
Robotics , Sound Localization , Humans , Bayes Theorem , Acoustics , Gravitation
18.
Cogn Res Princ Implic ; 9(1): 4, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38191869

ABSTRACT

Localizing sounds in noisy environments can be challenging. Here, we reproduce real-life soundscapes to investigate the effects of environmental noise on sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) were engaged in a speech-localization task conducted in three common soundscapes that progressively increased in complexity: nature, traffic, and a cocktail party setting. To control visual information and measure behaviors, we used visual virtual reality technology. The results revealed that the complexity of the soundscape had an impact on both performance errors and metacognitive evaluations. Participants reported increased effort and reduced confidence for sound localization in more complex noise environments. In contrast, the level of soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noise condition, participants who implemented a higher number of head rotations and explored a wider extent of space by rotating their heads made smaller localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations regarding sound localization in noisy environments by broadening the perspective to also include metacognitive evaluations, exploratory behaviors, and their interactions.


Subjects
Head Movements, Sound Localization, Humans, Sound, Exploratory Behavior, Mental Processes
19.
Hear Res ; 441: 108924, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38061267

ABSTRACT

The head-related transfer function (HRTF) describes the direction-dependent acoustic filtering by the head that occurs between a source signal in free-field space and the signal at the tympanic membrane. HRTFs contain information on sound source location via interaural differences of their magnitude or phase spectra and via the shapes of their magnitude spectra. The present study characterized HRTFs for source locations in the front horizontal plane for nine rabbits, which are a species commonly used in studies of the central auditory system. HRTF magnitude spectra shared several features across individuals, including a broad spectral peak at 2.6 kHz that increased gain by 12 to 23 dB depending on source azimuth; and a notch at 7.6 kHz and peak at 9.8 kHz visible for most azimuths. Overall, frequencies above 4 kHz were amplified for sources ipsilateral to the ear and progressively attenuated for frontal and contralateral azimuths. The slope of the magnitude spectrum between 3 and 5 kHz was found to be an unambiguous monaural cue for source azimuths ipsilateral to the ear. Average interaural level difference (ILD) between 5 and 16 kHz varied monotonically with azimuth over ±31 dB despite a relatively small head size. Interaural time differences (ITDs) at 0.5 kHz and 1.5 kHz also varied monotonically with azimuth over ±358 µs and ±260 µs, respectively. Remeasurement of HRTFs after pinna removal revealed that the large pinnae of rabbits were responsible for all spectral peaks and notches in magnitude spectra and were the main contribution to high-frequency ILDs (5-16 kHz), whereas the rest of the head was the main contribution to ITDs and low-frequency ILDs (0.2-1.5 kHz). Lastly, inter-individual differences in magnitude spectra were found to be small enough that deviations of individual HRTFs from an average HRTF were comparable in size to measurement error.
Therefore, the average HRTF may be acceptable for use in neural or behavioral studies of rabbits implementing virtual acoustic space when measurement of individualized HRTFs is not possible.
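The broadband ILD and the cross-correlation-based ITD described above can be computed from a pair of head-related impulse responses (HRIRs). The sketch below uses synthetic toy HRIRs and an assumed 96 kHz sampling rate rather than the study's measurements.

```python
import numpy as np

FS = 96_000  # sampling rate in Hz; an assumption, not the study's value

def ild_db(hrir_left, hrir_right):
    """Broadband interaural level difference in dB (left re right)."""
    return 10.0 * np.log10(np.sum(hrir_left ** 2) / np.sum(hrir_right ** 2))

def itd_us(hrir_left, hrir_right):
    """Interaural time difference in microseconds from the lag of the peak
    interaural cross-correlation; negative means the left ear leads."""
    xcorr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = int(np.argmax(xcorr)) - (len(hrir_right) - 1)
    return 1e6 * lag / FS

# Toy HRIRs: a unit pulse at the left ear; the right ear gets the same
# pulse 24 samples later and 6 dB weaker.
n = 256
left = np.zeros(n); left[10] = 1.0
right = np.zeros(n); right[34] = 10.0 ** (-6.0 / 20.0)

print(round(ild_db(left, right)))  # → 6    (left ear 6 dB louder)
print(round(itd_us(left, right)))  # → -250 (left ear leads by 250 µs)
```

In practice the ILD would be computed per frequency band (for example, 5-16 kHz as in the study) rather than broadband, and ITDs would be taken from low-pass-filtered HRIRs, but the lag-of-peak-correlation idea is the same.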


Subjects
Ear Auricle, Sound Localization, Animals, Rabbits, Acoustic Stimulation, External Ear, Sound
20.
Hear Res ; 441: 108922, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38043403

ABSTRACT

The purpose of our study was to estimate the time interval required for integrating the acoustic changes related to sound motion, using both psychophysical and EEG measures. Healthy listeners performed direction-identification tasks under dichotic conditions in the delayed-motion paradigm. The minimum audible movement angle (MAMA) was measured over a range of velocities from 60 to 360 deg/s. We also measured the minimal duration of motion at which listeners could identify its direction. EEG was recorded in the same group of subjects during passive listening, and motion-onset responses (MOR) were analyzed. MAMA increased linearly with motion velocity. The minimum audible angle (MAA) calculated from this linear function was about 2 deg. For higher velocities of delayed motion, we found 2- to 3-fold better spatial resolution than previously reported for motion starting at the sound onset. The time required for optimal discrimination of motion direction was about 34 ms. The main finding of our study was that both the direction-identification time obtained in the behavioral task and the cN1 latency behaved as hyperbolic functions of the sound's velocity. Direction-identification time decreased asymptotically to 8 ms, which we consider the minimal integration time for instantaneous shift detection. The peak latency of cN1 also decreased with increasing velocity and asymptotically approached 137 ms. This limit corresponded to the latency of the response to an instantaneous sound shift and was 37 ms later than the latency of the sound-onset response. The direction-discrimination time (34 ms) was of the same magnitude as the additional time required for motion processing to be reflected in the MOR potential. Thus, MOR latency can be viewed as a neurophysiological index of temporal integration.
Based on the findings obtained, we may assume that no measurable MOR would be evoked by slowly moving stimuli as they would reach their MAMAs in a time longer than the optimal integration time.
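The linear relation reported above, MAMA(v) = MAA + v·T, lets one recover the MAA (intercept, about 2 deg) and the integration time (slope, about 34 ms) by fitting a line to MAMA-versus-velocity data. The points below are synthetic values generated from those two abstract-reported numbers, not the study's raw data.

```python
import numpy as np

# Reported values: MAA ≈ 2 deg (intercept), integration time ≈ 34 ms (slope).
MAA_DEG, T_SEC = 2.0, 0.034

velocities = np.array([60.0, 120.0, 180.0, 240.0, 300.0, 360.0])  # deg/s
mamas = MAA_DEG + T_SEC * velocities  # synthetic noiseless MAMAs (deg)

# Fitting a first-degree polynomial recovers both parameters.
slope, intercept = np.polyfit(velocities, mamas, 1)
print(round(intercept, 3), round(slope * 1000.0, 1))  # MAA in deg, T in ms
```

The minimal motion duration needed to reach threshold at velocity v then follows as MAMA(v)/v = MAA/v + T, which is exactly the hyperbolic dependence on velocity described in the abstract.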


Subjects
Motion Perception, Sound Localization, Humans, Sound Localization/physiology, Sound, Reaction Time/physiology, Motion (Physics), Movement, Motion Perception/physiology